Learning Search Space Partition for Black-box Optimization using Monte Carlo Tree Search
High-dimensional black-box optimization has broad applications but remains a challenging problem. Given a set of samples (xi, yi), building a global model (as in Bayesian Optimization (BO)) suffers from the curse of dimensionality in the high-dimensional search space, while a greedy search may lead to sub-optimality. By recursively splitting the search space into regions with high/low function values, recent works like LaNAS show good performance in Neural Architecture Search (NAS), empirically reducing the sample complexity. In this paper, we propose LA-MCTS, which extends LaNAS to other domains. Unlike previous approaches, LA-MCTS learns the partition of the search space from a few samples and their function values in an online fashion. While LaNAS uses a linear partition and performs uniform sampling in each region, LA-MCTS adopts a nonlinear decision boundary and learns a local model to pick good candidates. If the nonlinear partition function and the local model fit the ground-truth black-box function well, then good partitions and candidates can be reached with far fewer samples. LA-MCTS serves as a meta-algorithm that uses existing black-box optimizers (e.g., BO, TuRBO) as its local models, achieving strong performance on general black-box optimization and reinforcement learning benchmarks, in particular for high-dimensional problems.
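The core loop described in the abstract — split the observed samples into high- and low-value groups, learn a boundary between them, then draw new candidates inside the promising region — can be sketched as follows. This is an illustrative simplification, not the authors' implementation: it uses a hand-rolled 2-means split on (x, f(x)) and a nearest-neighbor rule in place of the paper's learned SVM boundary, and the function names are hypothetical.

```python
import numpy as np

def two_means(Z, iters=20, seed=0):
    """Plain Lloyd's algorithm with k=2 (a stand-in for a KMeans library call)."""
    rng = np.random.default_rng(seed)
    c = Z[rng.choice(len(Z), 2, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(Z[:, None] - c[None], axis=2)  # (n, 2) distances
        lab = d.argmin(axis=1)
        for k in (0, 1):
            if (lab == k).any():
                c[k] = Z[lab == k].mean(axis=0)
    return lab

def split_region(X, y):
    """Cluster samples on the joint features (x, f(x)); the cluster with the
    lower mean y is 'good' (minimization). Returns a boolean mask."""
    Z = np.hstack([X, y[:, None]])
    Z = (Z - Z.mean(0)) / (Z.std(0) + 1e-12)  # normalize features and values
    lab = two_means(Z)
    if lab.min() == lab.max():                # degenerate split: keep everything
        return np.ones(len(y), bool)
    good = 0 if y[lab == 0].mean() <= y[lab == 1].mean() else 1
    return lab == good

def sample_in_good_region(X, y, n=256, seed=0):
    """Rejection-sample uniform candidates; keep those whose nearest labeled
    sample (in x-space) is 'good' -- a crude nearest-neighbor stand-in for
    the nonlinear SVM boundary used in the paper."""
    rng = np.random.default_rng(seed)
    mask = split_region(X, y)
    lo, hi = X.min(0), X.max(0)
    cand = rng.uniform(lo, hi, size=(n, X.shape[1]))
    nearest = np.linalg.norm(cand[:, None] - X[None], axis=2).argmin(axis=1)
    return cand[mask[nearest]]
```

In LA-MCTS this split is applied recursively at each tree node, and a local optimizer (e.g., TuRBO) replaces the uniform candidate draw inside the selected region.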
Review for NeurIPS paper: Learning Search Space Partition for Black-box Optimization using Monte Carlo Tree Search
Weaknesses: While the general idea of the paper is appealing and has been evaluated extensively, the presentation of the methodology lacks clarity at times. After reading Section 3, some issues remain that could have been addressed more clearly:
• Regarding lines 171/172: what do the authors mean by "regret reaches the plateau"?
• Consider the case of the 1D sine function where we have data points only at increments of pi. K-means would result in two clusters, i.e., the points with values 1 and -1, respectively. What would the resulting domain for TuRBO be then?
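The reviewer's 1D example is easy to reproduce: sampling sin at x = pi/2 + k·pi gives values that alternate between +1 and -1, so clustering the function values with k=2 splits them perfectly, yet the x-locations of the "good" cluster interleave with the bad ones — the region handed to a local optimizer such as TuRBO would not be a contiguous interval. A minimal numpy check (illustrative only, not from the paper):

```python
import numpy as np

# Evaluate sin only at x = pi/2 + k*pi, where it alternates between +1 and -1.
k = np.arange(8)
x = np.pi / 2 + k * np.pi
y = np.sin(x)

# With values in {+1, -1}, 2-means on the values reduces to splitting by sign.
good = y > 0          # cluster with value ~ +1 (maximization)
bad = ~good

# The good points are not contiguous in x: between every pair of neighboring
# good points there is a bad point strictly in between.
gx, bx = np.sort(x[good]), np.sort(x[bad])
for left, right in zip(gx[:-1], gx[1:]):
    assert ((bx > left) & (bx < right)).any()
print("good x:", np.round(gx, 2))   # pi/2, 5pi/2, 9pi/2, 13pi/2
```

This makes the reviewer's question concrete: the value-based clusters do not correspond to a single connected sub-domain.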
Review for NeurIPS paper: Learning Search Space Partition for Black-box Optimization using Monte Carlo Tree Search
The paper contains some interesting ideas on partitioning the search space in Bayesian optimization. The experimental studies are good, although the paper could benefit from another round of editing for the camera-ready. Reviewers were not overly excited about the paper, but also did not identify any fundamental flaws. As such, it is recommended for acceptance as a poster. We strongly encourage the authors to take the feedback from the reviews into account when preparing the camera-ready version.